46 research outputs found
Accelerating Machine Learning Queries with Linear Algebra Query Processing
The rapid growth of large-scale machine learning (ML) models has led numerous
commercial companies to utilize ML models for generating predictive results to
help business decision-making. As the two primary components of traditional
predictive pipelines, data processing and model prediction often operate in
separate execution environments, leading to redundant engineering and
computations. Additionally, the diverging mathematical foundations of data
processing and machine learning hinder cross-optimizations by combining these
two components, thereby overlooking potential opportunities to expedite
predictive pipelines.
In this paper, we propose an operator fusion method based on GPU-accelerated
linear algebraic evaluation of relational queries. Our method leverages linear
algebra computation properties to merge operators in machine learning
predictions and data processing, significantly accelerating predictive
pipelines by up to 317x. We perform a complexity analysis to deliver
quantitative insights into the advantages of operator fusion, considering
various data and model dimensions. Furthermore, we extensively evaluate matrix
multiplication query processing utilizing the widely-used Star Schema
Benchmark. Through comprehensive evaluations, we demonstrate the effectiveness
and potential of our approach in improving the efficiency of data processing
and machine learning workloads on modern hardware.
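As a toy illustration of the idea behind operator fusion (not the paper's GPU implementation; the relation, model weights, and predicate below are invented), folding a relational selection into the prediction pass avoids materializing the intermediate filtered relation:

```python
# Minimal sketch: a relational selection plus a linear-model prediction
# evaluated as one fused pass instead of two separate stages.

rows = [            # toy relation: one row per record, columns = features
    [1.0, 2.0],
    [3.0, 4.0],
    [5.0, 6.0],
]
weights = [0.5, -1.0]            # linear model w, prediction = x . w
predicate = lambda x: x[0] > 2   # relational selection (sigma)

# Unfused: materialize the filtered relation, then predict over it.
filtered = [x for x in rows if predicate(x)]
unfused = [sum(a * b for a, b in zip(x, weights)) for x in filtered]

# Fused: apply the selection inside the same pass as the dot product,
# skipping the intermediate materialization.
fused = [sum(a * b for a, b in zip(x, weights)) for x in rows if predicate(x)]

assert fused == unfused
print(fused)  # [-2.5, -3.5]
```

The real system expresses both stages as linear algebra on the GPU; here the point is only that the two operators collapse into one traversal of the data.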
Benchmarking Distributed Stream Data Processing Systems
The need for scalable and efficient stream analysis has led to the
development of many open-source streaming data processing systems (SDPSs) with
highly diverging capabilities and performance characteristics. While first
initiatives try to compare the systems for simple workloads, there is a clear
gap of detailed analyses of the systems' performance characteristics. In this
paper, we propose a framework for benchmarking distributed stream processing
engines. We use our suite to evaluate the performance of three widely used
SDPSs in detail, namely Apache Storm, Apache Spark, and Apache Flink. Our
evaluation focuses in particular on measuring the throughput and latency of
windowed operations, which are the basic type of operations in stream
analytics. For this benchmark, we design workloads based on real-life,
industrial use-cases inspired by the online gaming industry. The contribution
of our work is threefold. First, we give a definition of latency and throughput
for stateful operators. Second, we carefully separate the system under test
from the driver in order to correctly represent the open-world model of typical
stream processing deployments and, therefore, measure system performance under
realistic conditions. Third, we build the first benchmarking framework to
define and test the sustainable performance of streaming systems.
Our detailed evaluation highlights the individual characteristics and
use-cases of each system.
Comment: Published at ICDE 201
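The latency notion for a stateful windowed operator can be illustrated with a toy sketch: a window's result can only be emitted once all of its events have been processed, so its latency is the gap between the result's emission time and the timestamp of the window's latest event. All timestamps (in milliseconds), the window size, and the emission time below are invented for illustration:

```python
# Toy sketch of event-time latency for a tumbling-window operator.
events = [(100, "a"), (400, "b"), (1200, "a")]   # (event time in ms, key)
window_size = 1000                               # 1-second tumbling windows

# Assign each event to its window by integer division of its timestamp.
windows = {}
for ts, key in events:
    w = ts // window_size
    windows.setdefault(w, []).append(ts)

emission_time = 1500   # assumed wall-clock time window 0's result is emitted
latency = emission_time - max(windows[0])        # gap to the latest event
print(latency)  # 1500 - 400 = 1100 ms
```

Throughput is then measured on the driver side as the sustainable event rate at which this latency does not grow without bound.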
Metadata Representations for Queryable ML Model Zoos
Machine learning (ML) practitioners and organizations are building model zoos
of pre-trained models, containing metadata describing properties of the ML
models and datasets that are useful for reporting, auditing, reproducibility,
and interpretability purposes. This metadata is currently not standardized; its
expressivity is limited; and there is no interoperable way to store and query
it. Consequently, model search, reuse, comparison, and composition are
hindered. In this paper, we advocate for standardized ML model metadata
representation and management, and propose a supporting toolkit to help
practitioners manage and query that metadata.
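A minimal sketch of what queryable model-zoo metadata could look like (all field names and values below are invented for illustration, not a proposed standard):

```python
# Toy model zoo: each entry is a metadata record describing one model.
model_zoo = [
    {"name": "resnet50-v1", "task": "image-classification",
     "dataset": "imagenet", "accuracy": 0.76, "license": "apache-2.0"},
    {"name": "bert-base", "task": "text-classification",
     "dataset": "glue", "accuracy": 0.83, "license": "apache-2.0"},
]

def query(zoo, **filters):
    """Return models whose metadata matches every given field exactly."""
    return [m for m in zoo if all(m.get(k) == v for k, v in filters.items())]

print([m["name"] for m in query(model_zoo, task="text-classification")])
# ['bert-base']
```

With a standardized schema, the same query could run over any zoo, enabling interoperable model search and comparison.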
A Survey on the Evolution of Stream Processing Systems
Stream processing has been an active research field for more than 20 years,
but it is only now reaching its prime, thanks to recent successful efforts by the
research community and numerous worldwide open-source communities. This survey
provides a comprehensive overview of fundamental aspects of stream processing
systems and their evolution in the functional areas of out-of-order data
management, state management, fault tolerance, high availability, load
management, elasticity, and reconfiguration. We review noteworthy past research
findings, outline the similarities and differences between early ('00-'10) and
modern ('11-'18) streaming systems, and discuss recent trends and open
problems.
Comment: 34 pages, 15 figures, 5 tables
The ViP2P Platform: XML Views in P2P
The growing volumes of XML data sources on the Web or produced by
enterprises, organizations, etc., raise many performance challenges for data
management applications. In this work, we are concerned with the distributed,
peer-to-peer management of large corpora of XML documents, based on distributed
hash table (or DHT, in short) overlay networks. We present ViP2P (standing for
Views in Peer-to-Peer), a distributed platform for sharing XML documents based
on a structured P2P network infrastructure (DHT). At the core of ViP2P stand
distributed materialized XML views, defined by arbitrary XML queries, filled in
with data published anywhere in the network, and exploited to efficiently
answer queries issued by any network peer. ViP2P allows user queries to be
evaluated over XML documents published by peers in two modes. First, a
long-running subscription mode, when a query can be registered in the system
and receive answers incrementally when and if published data matches the query.
Second, queries can also be asked in an ad-hoc, snapshot mode, where results
are required immediately and must be computed based on the results of other
long-running, subscription queries. ViP2P innovates over other similar
DHT-based XML sharing platforms by using a very expressive structured XML query
language. This expressivity leads to a very flexible distribution of XML
content in the ViP2P network, and to efficient snapshot query execution. ViP2P
has been tested in real deployments of hundreds of computers. We present the
platform architecture, its internal algorithms, and demonstrate its efficiency
and scalability through a set of experiments. In our experiments, ViP2P
outperforms similar competitor systems by orders of magnitude in terms of data
volumes, network size, and data dissemination throughput.
Comment: RR-7812 (2011)
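The two query modes described above can be illustrated with a toy in-memory sketch (not ViP2P's actual API; the class, its methods, and the documents below are invented). A long-running subscription receives matches incrementally as data is published, while a snapshot query is answered from what subscriptions have already materialized:

```python
# Toy sketch of subscription vs. snapshot query evaluation over views.
class ToyViewStore:
    def __init__(self):
        self.subscriptions = []          # (predicate, materialized results)

    def subscribe(self, predicate):
        """Register a long-running query; the caller watches its result list."""
        view = (predicate, [])
        self.subscriptions.append(view)
        return view[1]

    def publish(self, doc):
        """Deliver a published document incrementally to matching views."""
        for predicate, results in self.subscriptions:
            if predicate(doc):
                results.append(doc)

    def snapshot(self, predicate):
        """Ad-hoc query answered from already-materialized view contents."""
        return [d for _, rs in self.subscriptions for d in rs if predicate(d)]

store = ToyViewStore()
papers = store.subscribe(lambda d: d["type"] == "paper")
store.publish({"type": "paper", "title": "XML views"})
store.publish({"type": "blog", "title": "misc"})
print(papers)
print(store.snapshot(lambda d: "XML" in d["title"]))
```

In ViP2P the views are distributed over a DHT and defined in an expressive XML query language; the sketch only shows the interplay of the two modes.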
Efficient Materialized View-Based Techniques for Web Data Management (Algorithms and Systems)
XML was recommended by the W3C in 1998 as a markup language for device- and
system-independent representation of information, and is nowadays used as a
data model for storing and querying large volumes of data in database systems.
In spite of significant research and systems development, processing very large
amounts of XML data still raises performance problems, due to the complexity
and heterogeneity of the data and to the complexity of current XML query
languages. Materialized views have long been used in databases to speed up
queries: they can be seen as precomputed query results that are re-used to
evaluate (part of) another query, and they have been a topic of intensive
research, in particular in the context of relational data warehousing. This
thesis investigates the applicability of materialized view techniques to
optimize the performance of Web data management tools, in particular in
distributed settings, considering XML data and queries. We make three
contributions.
First, we consider the problem of choosing the best views to materialize
within a given space budget in order to improve the performance of a query
workload. Our work is the first to address the view selection problem for a
rich subset of XQuery, enriched with the ability to select multiple nodes at
multiple levels of granularity. The challenges stem from the expressive power
and features of both the query and view languages, and from the size of the
search space of candidate views to materialize. While the general problem has
prohibitive complexity, we propose and study a heuristic algorithm and
demonstrate its superior performance compared to the state of the art.
Second, we consider the management of large XML corpora in peer-to-peer
networks based on distributed hash tables (DHTs). We consider the ViP2P
platform, in which distributed materialized XML views, defined by arbitrary
XML queries and filled in with data published anywhere in the network, are
exploited to efficiently answer queries issued by any network peer. This
thesis has contributed important scalability-oriented optimizations, as well
as a comprehensive set of experiments deployed in a country-wide WAN. These
experiments exceed similar competitor systems by orders of magnitude in terms
of data volumes and data dissemination throughput, and constitute to date the
most complete study of a fully deployed XML content management platform tested
at real scale.
Finally, we present a novel approach for scalable content-based
publish/subscribe (pub/sub) in the presence of constraints on the available
CPU and network resources of data publishers; this approach is implemented in
our Delta platform. We achieve scalability by off-loading subscriptions from
the publisher and leveraging view-based query rewriting to feed these
subscriptions from the data accumulated in others. Our main contribution is a
novel algorithm that organizes the views into a multi-level dissemination
network, computed with linear programming techniques so as to scale to large
numbers of views, respect the system's capacity constraints, and minimize
information propagation delays. The efficiency and effectiveness of our
algorithm are confirmed through extensive experiments, including a real
deployment in a WAN.
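The view-selection step can be sketched as a toy greedy heuristic that picks views by benefit density until the space budget is exhausted (the thesis's actual algorithm is far more involved; the view names, sizes, and benefit estimates below are invented):

```python
# Toy greedy heuristic for view selection under a space budget.
candidate_views = [
    # (name, storage size, estimated benefit to the query workload)
    ("v1", 40, 100),
    ("v2", 30, 90),
    ("v3", 50, 60),
]
budget = 70

# Sort candidates by benefit per unit of storage, then take greedily.
chosen, used = [], 0
for name, size, benefit in sorted(candidate_views,
                                  key=lambda v: v[2] / v[1], reverse=True):
    if used + size <= budget:
        chosen.append(name)
        used += size

print(chosen)  # ['v2', 'v1'] fits in the budget of 70
```

The hard part, which the sketch hides, is estimating each view's benefit under an expressive XQuery dialect and pruning the enormous space of candidate views.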
SiMa: Effective and Efficient Matching Across Data Silos Using Graph Neural Networks
How can we leverage existing column relationships within silos to predict
similar ones across silos? Can we do this efficiently and effectively? Existing
matching approaches do not exploit prior knowledge, relying on prohibitively
expensive similarity computations. In this paper we present the first technique
for matching columns across data silos, called SiMa, which leverages Graph
Neural Networks (GNNs) to learn from existing column relationships within data
silos, and dataset-specific profiles. The main novelty of SiMa is its ability
to be trained incrementally on column relationships within each silo
individually, without requiring the consolidation of all datasets in a single
place. Our experiments show that SiMa is more effective than state-of-the-art
matching methods, which are otherwise inapplicable to the silo setting, while
requiring orders of magnitude less computational resources. Moreover, we
demonstrate that SiMa considerably outperforms other state-of-the-art column
representation learning methods.
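A toy sketch of the intuition (not SiMa's actual GNN): columns are graph nodes carrying profile features, known intra-silo relationships are edges, and one round of neighbor averaging yields embeddings that can be compared across silos. All column names, features, and edges below are invented:

```python
# One hand-rolled message-passing step over a tiny column graph.
features = {  # per-column profile features, e.g. (avg value length, distinct ratio)
    "siloA.users.email":  [12.0, 1.0],
    "siloA.users.age":    [2.0, 0.5],
    "siloB.clients.mail": [11.5, 1.0],
}
edges = {  # known column relationships within each silo
    "siloA.users.email":  ["siloA.users.age"],
    "siloA.users.age":    ["siloA.users.email"],
    "siloB.clients.mail": [],
}

def embed(col):
    """Mean of own and neighbor features: one message-passing round."""
    group = [features[n] for n in edges[col]] + [features[col]]
    return [sum(vals) / len(group) for vals in zip(*group)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

# Compare a column in silo A against one in silo B via their embeddings.
sim = cosine(embed("siloA.users.email"), embed("siloB.clients.mail"))
print(sim)
```

SiMa learns the aggregation weights from the relationships within each silo rather than using a fixed mean, which is what lets it generalize across silos.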
Stateful Entities: Object-oriented Cloud Applications as Distributed Dataflows
Programming stateful cloud applications remains a very painful experience.
Instead of focusing on the business logic, programmers spend most of their time
dealing with distributed systems considerations, with the most important being
consistency, load balancing, failure management, recovery, and scalability. At
the same time, we witness an unprecedented adoption of modern dataflow systems
such as Apache Flink, Google Dataflow, and Timely Dataflow. These systems are
now performant and fault-tolerant, and they offer excellent state management
primitives.
With this line of work, we aim to investigate the opportunities and limits
of compiling general-purpose programs into stateful dataflows. Given a set of
easy-to-follow code conventions, programmers can author stateful entities, a
programming abstraction embedded in Python. We present a compiler pipeline
named StateFlow, to analyze the abstract syntax tree of a Python application
and rewrite it into an intermediate representation based on stateful dataflow
graphs. StateFlow then compiles that intermediate representation to one of
several target execution systems: Apache Flink and Beam, AWS Lambda, Flink's
Statefun, and
Cloudburst. Through an experimental evaluation, we demonstrate that the code
generated by StateFlow incurs minimal overhead. While developing and deploying
our prototype, we came to observe important limitations of current dataflow
systems in executing cloud applications at scale.
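A minimal sketch of the first compiler step described above, analyzing a Python application's abstract syntax tree (the stateful-entity class and the pipeline details are illustrative, not StateFlow's actual code):

```python
# Parse a "stateful entity" class and list its methods, which a compiler
# like StateFlow could then map to operators in a stateful dataflow graph.
import ast

source = '''
class Account:                    # a stateful entity: state plus methods
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):
        self.balance += amount
    def withdraw(self, amount):
        self.balance -= amount
'''

tree = ast.parse(source)
entity = tree.body[0]             # the ClassDef node
operators = [n.name for n in entity.body if isinstance(n, ast.FunctionDef)]
print(operators)  # ['__init__', 'deposit', 'withdraw']
```

From this point the real compiler rewrites the method bodies into an intermediate representation and wires the operators together as a dataflow graph; the sketch only shows where the analysis starts.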
Leveraging Large Language Models for Sequential Recommendation
Sequential recommendation problems have received increasing attention in
research during the past few years, leading to the inception of a large variety
of algorithmic approaches. In this work, we explore how large language models
(LLMs), which are nowadays introducing disruptive effects in many AI-based
applications, can be used to build or improve sequential recommendation
approaches. Specifically, we devise and evaluate three approaches to leverage
the power of LLMs in different ways. Our results from experiments on two
datasets show that initializing the state-of-the-art sequential recommendation
model BERT4Rec with embeddings obtained from an LLM improves NDCG by 15-20%
compared to the vanilla BERT4Rec model. Furthermore, we find that a simple
approach that leverages LLM embeddings for producing recommendations can
provide competitive performance by highlighting semantically related items. We
publicly share the code and data of our experiments to ensure reproducibility.
Comment: 9 pages
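The simple embedding-based recommender mentioned above can be sketched as follows (the items and their embedding vectors are fabricated; a real setup would obtain the vectors from an LLM over each item's text):

```python
# Recommend the unseen item most similar to the user's last interaction.
item_embeddings = {
    "sci-fi novel A": [0.9, 0.1, 0.0],
    "sci-fi novel B": [0.85, 0.15, 0.05],
    "cookbook C":     [0.0, 0.2, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def recommend(history):
    """Rank unseen items by similarity to the last item in the history."""
    last = item_embeddings[history[-1]]
    candidates = [i for i in item_embeddings if i not in history]
    return max(candidates, key=lambda i: cosine(item_embeddings[i], last))

print(recommend(["sci-fi novel A"]))  # 'sci-fi novel B'
```

Because the vectors come from an LLM, semantically related items end up close in the embedding space, which is what makes even this trivial ranking competitive.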
Valentine: Evaluating Matching Techniques for Dataset Discovery
Data scientists today search large data lakes to discover and integrate
datasets. In order to bring together disparate data sources, dataset discovery
methods rely on some form of schema matching: the process of establishing
correspondences between datasets. Traditionally, schema matching has been used
to find matching pairs of columns between a source and a target schema.
However, the use of schema matching in dataset discovery methods differs from
its original use. Nowadays, schema matching serves as a building block for
indicating and ranking inter-dataset relationships. Surprisingly, although a
discovery method's success relies highly on the quality of the underlying
matching algorithms, the latest discovery methods employ existing schema
matching algorithms in an ad-hoc fashion due to the lack of openly-available
datasets with ground truth, reference method implementations, and evaluation
metrics. In this paper, we aim to rectify the problem of evaluating the
effectiveness and efficiency of schema matching methods for the specific needs
of dataset discovery. To this end, we propose Valentine, an extensible
open-source experiment suite to execute and organize large-scale automated
matching experiments on tabular data. Valentine includes implementations of
seminal schema matching methods that we either implemented from scratch (due to
the absence of open-source code) or imported from open repositories. The
contributions of Valentine are: i) the definition of four schema matching
scenarios as encountered in dataset discovery methods, ii) a principled dataset
fabrication process tailored to the scope of dataset discovery methods and iii)
the most comprehensive evaluation of schema matching techniques to date,
offering insight into the strengths and weaknesses of existing techniques,
which can serve as a guide for employing schema matching in future dataset
discovery methods.
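As a small illustration of how such a suite can score a matching method against ground truth (the column pairs below are invented; Valentine additionally reports ranking-oriented metrics):

```python
# Score predicted column matches against a ground-truth set of pairs.
ground_truth = {("src.customer_id", "tgt.cust_id"),
                ("src.name", "tgt.full_name")}
predicted    = {("src.customer_id", "tgt.cust_id"),
                ("src.name", "tgt.address")}       # one correct, one wrong

tp = len(predicted & ground_truth)   # true positives: pairs found correctly
precision = tp / len(predicted)      # fraction of predictions that are right
recall    = tp / len(ground_truth)   # fraction of true pairs that were found
print(precision, recall)  # 0.5 0.5
```

Running every matcher over the same fabricated dataset pairs with this kind of scoring is what makes the comparison across methods fair and repeatable.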